Easy2Siksha.com
GNDU QUESTION PAPERS 2023
BA/BSc 4th SEMESTER
QUANTITATIVE TECHNIQUES - IV
Time Allowed: 3 Hours Maximum Marks:
Note: Aempt Five quesons in all, selecng at least One queson from each secon. The
Fih queson may be aempted from any secon. All quesons carry equal marks.
SECTION–A
1.(a) Discuss the procedure to fit a modified exponential curve.
(b) What do you mean by partial correlation coefficient? Determine the partial correlation
coefficient r₁₂.₃ and the multiple correlation coefficient R₁.₂₃ from the following data:

r₁₂ = 0.5, r₁₃ = 0.7, r₂₃ = 0.6
2. For the following data, fit a multiple linear regression equation of X₁ on X₂ and X₃:

X₁:  3   5   6  12  14   2
X₂: 10  12   7   6   8   3
X₃:  9  10  54  42  30  12
SECTION–B
3.(a) State and prove Bayes’ theorem of probability.
(b) What do you mean by mathematical expectation? Also explain the properties of
mathematical expectation.
4.(a) Explain the rules of addition and multiplication in the theory of probability.
(b) What do you mean by a random variable? Explain it with an example. Also explain the
difference between probability mass function and probability density function.
SECTION-C
5. What is normal distribution? Draw a rough sketch of its probability density function.
Also give important properties of normal distribution.
6. Define Poisson distribution. Under what conditions is it applicable? Obtain Poisson
distribution as a limiting case of binomial distribution. Also discuss the properties of
Poisson distribution.
SECTION-D
7.(a) Disnguish between a census and a sampling enquiry and discuss their comparave
advantages.
(b) How will you select a sample using straed random sampling?
8.(a) Disnguish between random sampling and subjecve sampling. How will you select a
sample from the populaon using simple random sampling technique (with replacement
and without replacement)?
(b) Explain the concept of standard error of esmates.
GNDU ANSWER PAPERS 2023
BA/BSc 4th SEMESTER
QUANTITATIVE TECHNIQUES - IV
Time Allowed: 3 Hours Maximum Marks:
Note: Aempt Five quesons in all, selecng at least One queson from each secon. The
Fih queson may be aempted from any secon. All quesons carry equal marks.
SECTION–A
1.(a) Discuss the procedure to fit a modified exponential curve.
(b) What do you mean by partial correlation coefficient? Determine the partial correlation
coefficient r₁₂.₃ and the multiple correlation coefficient R₁.₂₃ from the following data:

r₁₂ = 0.5, r₁₃ = 0.7, r₂₃ = 0.6
Ans: (a) Procedure to Fit a Modified Exponential Curve
In statistics, we often try to fit a mathematical curve to real-life data so that we can
understand the pattern and make predictions. Sometimes, data grows not in a straight line,
but slowly at first, then faster, and then stabilizes. Such growth patterns are beautifully
explained using exponential curves.
But in many practical situations, the simple exponential curve y = a·bˣ
does not work well. So statisticians use a modified exponential curve, commonly written
as:

y = a + b·cˣ

This curve is called “modified” because instead of starting at zero, it allows a baseline value
‘a’ from which growth starts. This makes it much more realistic for economic, biological,
population and business data.
How do we fit this modified exponential curve?
Fitting simply means:
“Finding the values of a, b, and c so that the curve best represents the given data.”
Since this equation is not linear, direct application of the least squares method is difficult. So
statisticians apply a clever trick: they transform it into a linear form.
Step-by-step procedure:
Start with the equation

y = a + b·cˣ

Make a substitution to simplify
Let Y = y − a, so the model becomes

Y = b·cˣ

Take logarithms on both sides

log Y = log b + x log c

Now the equation looks like a straight-line equation:

Y′ = A + B·x

where
Y′ = log Y, A = log b, B = log c
Now we can apply the least squares method, just like fitting a straight line.
After solving, we get the values of:
a (the original baseline)
b = antilog(A)
c = antilog(B)
Finally, substitute these values in:

y = a + b·cˣ

and the modified exponential curve is fitted.
So, instead of struggling with a complicated non-linear curve, we convert it cleverly, solve
using straight line method, and then convert back. That’s the beauty of mathematics!
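The steps above can be sketched in Python (a minimal sketch on hypothetical data, assuming the baseline ‘a’ is already known or fixed by trial; natural logs are used here in place of common logs and antilogs, which works the same way):

```python
import math

# Hypothetical data following y = a + b*c^x with a = 2, b = 3, c = 0.5
x = [0, 1, 2, 3, 4]
a = 2.0                                   # baseline, assumed known (or fixed by trial)
y = [a + 3.0 * 0.5 ** t for t in x]

# Substitute Y = y - a, so that Y = b*c^x
Y = [yi - a for yi in y]
# Take logarithms: log Y = log b + x log c  (a straight line in x)
logY = [math.log(v) for v in Y]

# Fit the straight line logY = A + B*x by least squares
n = len(x)
xbar, lbar = sum(x) / n, sum(logY) / n
B = sum((xi - xbar) * (li - lbar) for xi, li in zip(x, logY)) \
    / sum((xi - xbar) ** 2 for xi in x)
A = lbar - B * xbar

# Convert back: b = antilog(A), c = antilog(B)
b, c = math.exp(A), math.exp(B)
print(round(b, 3), round(c, 3))   # recovers b = 3, c = 0.5
```

On exact data the straight-line fit recovers b and c perfectly; on real data the same steps give the least-squares estimates for the transformed model.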
(b) Meaning of Partial Correlation Coefficient
Now let’s move to the second part.
In real life, many variables are interconnected. For example:
Marks depend on intelligence, study time, and health.
Sales depend on price, advertisement, and income.
Height may relate to weight but also depends on age.
Suppose we want to know the relationship between X₁ and X₂, but there is a third variable
X₃, which also influences them. The ordinary correlation between X₁ and X₂ may be
misleading because part of the relationship may actually be due to X₃.
So, we remove the effect of X₃ and then check how strongly X₁ and X₂ are related.
This pure relationship after removing influence of third variable is called:
Partial Correlation Coefficient
Specifically, r₁₂.₃ means the correlation between variables 1 and 2, after eliminating the
effect of variable 3.
If r₁₂.₃ is small, it means most of their relation was actually due to the third variable.
Formula for Partial Correlation

r₁₂.₃ = (r₁₂ − r₁₃·r₂₃) / √[(1 − r₁₃²)(1 − r₂₃²)]
Given:
r₁₂ = 0.5, r₁₃ = 0.7, r₂₃ = 0.6

Step-by-Step Calculation
Numerator
r₁₂ − r₁₃·r₂₃ = 0.5 − (0.7)(0.6) = 0.5 − 0.42 = 0.08
Denominator
√[(1 − r₁₃²)(1 − r₂₃²)] = √[(1 − 0.49)(1 − 0.36)] = √(0.51 × 0.64) = √0.3264 ≈ 0.571
Final Answer

r₁₂.₃ = 0.08 / 0.571 ≈ 0.14

So, after removing the effect of the third variable, the correlation between variables 1 and 2
becomes very weak (0.14).
This means the earlier correlation (0.5) was largely influenced by variable 3.
Multiple Correlation Coefficient
Now imagine we want to know how well two variables together explain the behavior of a
third variable.
For example:
How well income and education together explain standard of living?
How well advertising and price together explain sales?
This is measured using multiple correlation coefficient.
Here we find R₁.₂₃,
which means the correlation between variable 1 and both 2 and 3 together.
Formula

R₁.₂₃ = √[(r₁₂² + r₁₃² − 2·r₁₂·r₁₃·r₂₃) / (1 − r₂₃²)]

Substitute Values

R₁.₂₃ = √[(0.5² + 0.7² − 2(0.5)(0.7)(0.6)) / (1 − 0.6²)]

Numerator
0.25 + 0.49 − 0.42 = 0.32
Denominator
1 − 0.36 = 0.64
So

R₁.₂₃ = √(0.32 / 0.64) = √0.5 ≈ 0.707
So the multiple correlation between variable 1 and the set of variables (2 and 3) together is
0.707, which shows a fairly strong joint influence.
Conclusion (Simple Understanding)
The modified exponential curve is a mathematical tool used to fit data which grows
in a gradual but accelerating manner. We transform it into linear form, apply least
squares, and then reconvert.
Partial correlation tells us the “true” relationship between two variables after
removing the impact of the third.
Multiple correlation tells us how strongly one variable is influenced by two variables
together.
Final Numerical Answers:
r₁₂.₃ ≈ 0.14
R₁.₂₃ ≈ 0.707
So, the first relationship becomes weak after removing third variable, while the combined
effect of two variables on first is quite strong.
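As a quick check, both formulas can be evaluated in a few lines of Python (a minimal sketch, using the correlation values implied by the computations above):

```python
import math

# Correlation values implied by the worked computations above
r12, r13, r23 = 0.5, 0.7, 0.6

# Partial correlation: effect of variable 3 removed
r12_3 = (r12 - r13 * r23) / math.sqrt((1 - r13 ** 2) * (1 - r23 ** 2))

# Multiple correlation: variables 2 and 3 acting together on variable 1
R1_23 = math.sqrt((r12 ** 2 + r13 ** 2 - 2 * r12 * r13 * r23) / (1 - r23 ** 2))

print(round(r12_3, 2), round(R1_23, 3))  # 0.14 0.707
```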
2. For the following data, fit a multiple linear regression equation of X₁ on X₂ and X₃:

X₁:  3   5   6  12  14   2
X₂: 10  12   7   6   8   3
X₃:  9  10  54  42  30  12
Ans: 🌟 Understanding the Problem
We are asked to fit a multiple linear regression equation of X₁ on X₂ and X₃. In statistics,
regression analysis is a way of modeling the relationship between a dependent variable
(here X₁) and one or more independent variables (here X₂ and X₃).
👉 In simple words: We want to see how changes in X₂ and X₃ affect X₁, and then express
this relationship as a mathematical equation.
🌟 The Data
We are given:

Observation   X₁   X₂   X₃
1              3   10    9
2              5   12   10
3              6    7   54
4             12    6   42
5             14    8   30
6              2    3   12

Here, X₁ is the dependent variable, while X₂ and X₃ are independent variables.
🌟 The Regression Equation
The general form of a multiple linear regression equation is:

X₁ = a + b₂X₂ + b₃X₃

where
a = intercept (constant term)
b₂ = coefficient of X₂
b₃ = coefficient of X₃
After performing regression analysis, the fitted equation is:

X₁ = 0.94 + 0.27X₂ + 0.15X₃
🌟 Interpreting the Coefficients
1. Intercept (a = 0.94)
o This is the baseline value of X₁ when both X₂ and X₃ are zero.
o In practice, it represents the starting point of the regression line.
2. Coefficient of X₂ (b₂ = 0.27)
o For every unit increase in X₂, X₁ increases by about 0.27 units, keeping X₃
constant.
o This shows a positive relationship between X₁ and X₂.
3. Coefficient of X₃ (b₃ = 0.15)
o For every unit increase in X₃, X₁ increases by about 0.15 units, keeping X₂
constant.
o This also shows a positive relationship, though weaker compared to X₂.
🌟 Model Summary (Key Points)
R-squared = 0.317
o This means that about 31.7% of the variation in X₁ is explained by X₂ and X₃.
o The remaining variation is due to other factors not included in the model.
Adjusted R-squared = -0.138
o Since the sample size is very small (only 6 observations), the adjusted
R-squared is negative, showing that the model may not generalize well.
F-statistic = 0.6974, p-value = 0.564
o The overall model is not statistically significant at conventional levels.
o This is expected because of the small dataset.
👉 In simple words: The equation works mathematically, but with such limited data, we
cannot claim strong predictive power.
🌟 Step-by-Step Explanation for Students
1. Identify Variables:
o Dependent variable: X₁
o Independent variables: X₂ and X₃
2. Set Up Equation:
o General form: X₁ = a + b₂X₂ + b₃X₃
3. Use Regression Analysis:
o Apply the least squares method to estimate coefficients.
o This minimizes the difference between actual and predicted values.
4. Obtain Coefficients:
o a = 0.94, b₂ = 0.27, b₃ = 0.15.
5. Write Final Equation:
o X₁ = 0.94 + 0.27X₂ + 0.15X₃.
🌟 Practical Example
Imagine X₁ represents sales, X₂ represents advertising spend, and X₃ represents the number
of salespeople.
The equation tells us that increasing advertising spend by 1 unit increases sales by
0.27 units.
Increasing the number of salespeople by 1 unit increases sales by 0.15 units.
Even without advertising or salespeople, sales start at about 0.94 units (the
intercept).
This shows how regression helps businesses understand the impact of different factors on
outcomes.
🌟 Limitations of the Model
Small Sample Size: Only 6 observations, which is too few for reliable conclusions.
Low R-squared: The model explains only 31.7% of variation.
Not Statistically Significant: The p-values are high, meaning the relationships may be
due to chance.
👉 In real-world research, larger datasets are needed for stronger results.
🌟 Conclusion
The fitted regression equation is:

X₁ = 0.94 + 0.27X₂ + 0.15X₃

This equation shows that both X₂ and X₃ positively influence X₁, though the model’s
predictive power is limited due to the small dataset.
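The least-squares computation behind the fitted equation can be sketched in plain Python by solving the two normal equations in deviation form (a minimal sketch, not a full regression package):

```python
# Least-squares fit of X1 = a + b2*X2 + b3*X3 for the data above,
# solving the normal equations in deviation (about-the-mean) form.
X1 = [3, 5, 6, 12, 14, 2]
X2 = [10, 12, 7, 6, 8, 3]
X3 = [9, 10, 54, 42, 30, 12]

n = len(X1)
m1, m2, m3 = sum(X1) / n, sum(X2) / n, sum(X3) / n

# Sums of squares and cross-products about the means
S22 = sum((x - m2) ** 2 for x in X2)
S33 = sum((x - m3) ** 2 for x in X3)
S23 = sum((x2 - m2) * (x3 - m3) for x2, x3 in zip(X2, X3))
S12 = sum((x1 - m1) * (x2 - m2) for x1, x2 in zip(X1, X2))
S13 = sum((x1 - m1) * (x3 - m3) for x1, x3 in zip(X1, X3))

# Normal equations:  b2*S22 + b3*S23 = S12  and  b2*S23 + b3*S33 = S13
det = S22 * S33 - S23 ** 2
b2 = (S12 * S33 - S13 * S23) / det
b3 = (S22 * S13 - S12 * S23) / det
a = m1 - b2 * m2 - b3 * m3

print(round(a, 2), round(b2, 2), round(b3, 2))  # 0.94 0.27 0.15
```

Running this reproduces the coefficients quoted above (0.94, 0.27, 0.15), confirming the fitted equation.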
SECTION–B
3.(a) State and prove Bayes’ theorem of probability.
(b) What do you mean by mathematical expectation? Also explain the properties of
mathematical expectation.
Ans: (a) Bayes’ Theorem – Meaning, Idea, Statement and Proof
Imagine you are a doctor. A patient visits you with symptoms of fever and headache. Now
you have to decide:
Is it malaria, typhoid, or just a viral fever?
You don’t guess randomly. You use your past knowledge (like how many patients normally
get malaria, how many get typhoid, etc.) and then you also see the symptom and update
your belief accordingly.
This is exactly what Bayes’ Theorem does in probability.
It helps us revise our earlier (prior) probability after getting new information (evidence).
So, in simple words:
Meaning of Bayes’ Theorem
Bayes’ Theorem tells us:
“If we already know the probability of different possible events, and after that we get
additional information, then Bayes’ theorem helps us update or revise the earlier
probabilities based on this new information.”
In even simpler language:
It helps us find the probability of a cause when we already know the result.
Let’s understand with an example
Suppose there are two factories:
Factory A makes 60% pens
Factory B makes 40% pens
Factory A produces 2% defective pens.
Factory B produces 5% defective pens.
If you pick a pen randomly and it turns out to be defective, Bayes’ Theorem helps you
answer:
👉 “What is the probability that this defective pen came from Factory B?”
So we move from result → finding cause.
That is the beauty of Bayes’ Theorem.
Formal Statement
Let
B₁, B₂, B₃, …, Bₙ be a set of events which form a partition of the sample space (that means
one of them must occur and no two can happen together).
Let A be an event which depends on these B’s.
Then, according to Bayes’ Theorem:

P(Bᵢ | A) = P(A | Bᵢ)·P(Bᵢ) / Σⱼ P(A | Bⱼ)·P(Bⱼ)

In simple English,
Probability of cause Bᵢ happening after event A has occurred =
(Prior probability of Bᵢ × Probability of A given Bᵢ) / Total probability of A
Proof of Bayes’ Theorem (Very Simple Way)
We know from the definition of conditional probability:

P(Bᵢ | A) = P(A ∩ Bᵢ) / P(A)

Now,
P(A ∩ Bᵢ) = P(A | Bᵢ)·P(Bᵢ)
So substitute it:

P(Bᵢ | A) = P(A | Bᵢ)·P(Bᵢ) / P(A)

Now we must find P(A).
Since A may occur due to B₁ or B₂ or … or Bₙ,

P(A) = P(A | B₁)·P(B₁) + P(A | B₂)·P(B₂) + … + P(A | Bₙ)·P(Bₙ)

Substituting this in the earlier equation gives

P(Bᵢ | A) = P(A | Bᵢ)·P(Bᵢ) / Σⱼ P(A | Bⱼ)·P(Bⱼ)

And this is Bayes’ Theorem proved!
So the proof is nothing but simple use of
conditional probability
total probability theorem
Conclusion of Bayes’ Theorem
Bayes’ Theorem is extremely powerful because it connects past knowledge with present
evidence to give us updated probability. It is used in medical diagnosis, weather prediction,
machine learning, spam filters, court judgments, and many real-life situations.
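The pen-factory example given earlier can be worked mechanically with the theorem (a minimal Python sketch):

```python
# The pen-factory example: which factory did a defective pen come from?
priors = {"A": 0.60, "B": 0.40}        # share of pens made by each factory
defect_rate = {"A": 0.02, "B": 0.05}   # P(defective | factory)

# Evidence: total probability of picking a defective pen
p_defective = sum(priors[f] * defect_rate[f] for f in priors)

# Posterior: P(factory B | defective) by Bayes' theorem
p_B_given_defective = priors["B"] * defect_rate["B"] / p_defective
print(round(p_B_given_defective, 3))  # 0.625
```

Even though Factory B makes fewer pens, its higher defect rate makes it the more likely source of a defective pen (posterior 0.625).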
(b) Mathematical Expectation: Meaning and Properties
Now let us move to the second part.
Meaning of Mathematical Expectation
Suppose you are playing a game of chance, like tossing a coin to win money. You may lose
sometimes and win sometimes. But if you play this game many times, you would want to
know:
👉 “On average, how much can I expect to gain or lose?”
This average is called Mathematical Expectation.
In simple words:
Mathematical Expectation is the average value of a random variable that we expect to get
in the long run.
If a random variable X takes values
x₁, x₂, x₃, …, xₙ with probabilities
p₁, p₂, p₃, …, pₙ respectively,
then the mathematical expectation is

E(X) = Σ xᵢ·pᵢ

So expectation = Σ (value × probability)
Example
If a coin is tossed:
Winning ₹2 when Head comes
Losing ₹1 when Tail comes
Then expectation =
E(X) = (2)(1/2) + (−1)(1/2) = 1 − 0.5 = 0.5
So on average, you gain ₹0.5 per toss.
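The same “value × probability” computation in Python (a minimal sketch of the coin example):

```python
# Coin game: win Rs 2 on Head (prob 0.5), lose Rs 1 on Tail (prob 0.5)
outcomes = [(2, 0.5), (-1, 0.5)]   # (value, probability) pairs

# Expectation = sum of value * probability
expectation = sum(value * prob for value, prob in outcomes)
print(expectation)  # 0.5
```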
Properties of Mathematical Expectation
Now let us see some important properties in simple language.
1. Linearity of Expectation
If X and Y are random variables and a, b are constants, then:
E(aX + bY) = a·E(X) + b·E(Y)
Meaning:
Expectation behaves like normal algebra.
You can take constants outside and separate expectations.
2. Expectation of a Constant
If C is a constant number, then:
E(C) = C
Because a constant does not change, its average is itself.
3. Expectation of Sum
E(X + Y) = E(X) + E(Y)
This is very powerful because it does not even need independence.
4. Expectation of Difference
E(X − Y) = E(X) − E(Y)
Just like normal algebra.
5. Expectation of Product
If X and Y are independent,
E(XY) = E(X)·E(Y)
But remember, this holds only when X and Y are independent.
6. Expectation Shows Long Run Behaviour
Expectation tells us what happens “on average” if an experiment is repeated many times. It
is not guaranteed in one trial, but it is true in the long run.
Final Conclusion
Bayes’ Theorem and Mathematical Expectation are two very important concepts in
probability.
Bayes theorem helps us revise probability when new information is available. It works
from result to cause and is widely used in real life.
Mathematical Expectation tells us the average outcome of a random experiment in the
long run. It is like the “center of gravity” of probabilities and has many useful properties
which make calculations easy.
4.(a) Explain the rules of addition and multiplication in the theory of probability.
(b) What do you mean by a random variable? Explain it with an example. Also explain the
difference between probability mass function and probability density function.
Ans: Bayes’ theorem and mathematical expectation
Bayes’ theorem of probability
Statement
Bayes’ theorem gives a way to invert conditional probabilities. Suppose the sample space is
partitioned into mutually exclusive and exhaustive events B₁, B₂, …, Bₙ with P(Bᵢ) > 0, and
let A be any event with P(A) > 0. Then, for each i:

P(Bᵢ | A) = P(A | Bᵢ)·P(Bᵢ) / Σⱼ P(A | Bⱼ)·P(Bⱼ)

In words: posterior = likelihood × prior / evidence. The numerator captures how compatible
A is with Bᵢ, weighted by the prior chance of Bᵢ; the denominator averages that compatibility
across all competing causes Bⱼ.
Proof
The proof uses the definitions of conditional probability and the law of total probability.
Start with conditional probability:

P(Bᵢ | A) = P(A ∩ Bᵢ) / P(A)

Rewrite the numerator using conditional probability:

P(A ∩ Bᵢ) = P(A | Bᵢ)·P(Bᵢ)

Rewrite the denominator using the law of total probability: Because B₁, …, Bₙ partition the
space,

P(A) = Σⱼ P(A ∩ Bⱼ) = Σⱼ P(A | Bⱼ)·P(Bⱼ)

Combine these:

P(Bᵢ | A) = P(A | Bᵢ)·P(Bᵢ) / Σⱼ P(A | Bⱼ)·P(Bⱼ)

This completes the proof.
A simple, intuitive example
Imagine three factories F₁, F₂, F₃ produce identical bulbs in known proportions P(F₁), P(F₂),
P(F₃), each with its own defect rate (likelihood) P(defect | Fᵢ). You pick one bulb at random
and it is defective (event D). What is the chance it came from factory F₁?
Prior: P(F₁).
Likelihood: P(D | F₁).
Evidence:
P(D) = P(D | F₁)·P(F₁) + P(D | F₂)·P(F₂) + P(D | F₃)·P(F₃)
Posterior:
P(F₁ | D) = P(D | F₁)·P(F₁) / P(D)
So, despite two other factories existing, a high mixture share and a higher defect rate can make
F₁ the most likely source of the defective bulb. That’s Bayes in action: evidence updates
belief.
Mathematical expectation
Meaning and definition
Mathematical expectation (or expected value) is the long-run average value of a random
variable you would anticipate over many repetitions of the underlying random process.
For a discrete random variable X with probability mass function p(x) = P(X = x):

E[X] = Σₓ x·p(x)

For a continuous random variable X with probability density function f(x):

E[X] = ∫ x·f(x) dx
Intuitively, expectation weighs each possible outcome by its probability and sums (or
integrates) those contributions.
Why expectation matters
Center of gravity: It acts like the balance point of the distribution.
Forecasting: It captures the average outcome you’d plan for.
Optimization: Many decisions (e.g., minimizing expected loss) hinge on expected
values.
Properties of mathematical expectation
Linearity
Additivity and scaling: For any random variables X, Y and constants a, b,

E[aX + bY] = a·E[X] + b·E[Y]

This holds whether or not X and Y are independent. Linearity makes expectation algebra a
reliable workhorse.
Expectation of a constant
Constants are their own expectations: If c is nonrandom,

E[c] = c

Indicator trick
Expectations equal probabilities for indicators: If 1_A is the indicator of event A (1 if A
occurs, else 0),

E[1_A] = P(A)

This bridges events and random variables elegantly.
Products and independence
Expectation of a product factors under independence: If X and Y are independent,

E[XY] = E[X]·E[Y]

Without independence, this identity need not hold.
Law of the unconscious statistician (LOTUS)
Expectations of functions without changing variables: For a function g,

E[g(X)] = Σₓ g(x)·p(x)   (discrete)
E[g(X)] = ∫ g(x)·f(x) dx   (continuous)

You don’t need the distribution of g(X) to find E[g(X)].
Conditional expectation
Expectation with information: E[X | Y] is a random variable (a function of Y)
representing the expected value of X after seeing Y.
Tower property (iterated expectations):

E[E[X | Y]] = E[X]

This is powerful for breaking complex expectations into manageable parts.
Jensen’s inequality
Convex functions push expectations upward: If g is convex,

g(E[X]) ≤ E[g(X)]

For concave g, the inequality reverses. Jensen connects shape (convexity) with averaging
behavior.
Variance and expectation
Variance is an expectation of squared deviation:

Var(X) = E[(X − E[X])²] = E[X²] − (E[X])²

Expectation is central to understanding spread as well as center.
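The identity Var(X) = E[X²] − (E[X])² is easy to verify numerically on a small hypothetical distribution; note that computing E[X²] this way is an instance of LOTUS with g(x) = x²:

```python
# A small hypothetical distribution: value -> probability
dist = {1: 0.2, 2: 0.5, 3: 0.3}

EX = sum(x * p for x, p in dist.items())                  # E[X]
EX2 = sum(x * x * p for x, p in dist.items())             # E[X^2] via LOTUS
var_direct = sum((x - EX) ** 2 * p for x, p in dist.items())  # E[(X - E[X])^2]

print(round(var_direct, 2), round(EX2 - EX ** 2, 2))      # both give 0.49
```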
A relatable example
Suppose a game pays ₹10 with probability 0.3, ₹2 with probability 0.6, and ₹0 otherwise.
Expected payoff:

E[X] = 10(0.3) + 2(0.6) + 0(0.1) = 4.2

If playing costs ₹4, the expected net gain is ₹0.2. Over many plays, you’d expect to come out
slightly ahead, even though any single play is uncertain.
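The game above, computed directly; the net gain uses linearity, E[X − 4] = E[X] − 4:

```python
# Game: pays Rs 10 with prob 0.3, Rs 2 with prob 0.6, Rs 0 with prob 0.1
payoffs = [(10, 0.3), (2, 0.6), (0, 0.1)]
cost = 4

expected_payoff = sum(v * p for v, p in payoffs)   # E[X]
expected_net = expected_payoff - cost              # E[X - 4] = E[X] - 4
print(round(expected_payoff, 1), round(expected_net, 1))  # 4.2 0.2
```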
Putting Bayes and expectation together
Bayesian updating uses Bayes’ theorem to revise beliefs (probabilities) in light of
evidence.
Decision-making under uncertainty often uses expectation to choose actions (e.g.,
maximize expected utility) based on those updated beliefs.
In practice, Bayes’ theorem tells you “what to believe” after seeing data, while expectation
tells you “what to plan for” given those beliefs.
Conclusion
Bayes’ theorem provides a principled rule for updating probabilities: you start with priors,
weigh them by how compatible the evidence is (likelihoods), and normalize to get
posteriors. Mathematical expectation captures the average outcome of a random variable
and comes with robust, intuitive properties (especially linearity, LOTUS, and the tower
property) that make it indispensable in probability, statistics, and decision science.
SECTION-C
5. What is normal distribution? Draw a rough sketch of its probability density function.
Also give important properties of normal distribution.
Ans: 🌟 What is Normal Distribution?
Imagine you are a teacher who conducts a test for 100 students. Do all students score the
same marks? Of course not! Some score very high, some very low, but most students score
somewhere in the middle.
If you arrange their marks on a graph:
Scores on the horizontal axis (X-axis)
Number of students on the vertical axis (Y-axis)
You will notice something magical…
Most students are near the average, fewer students are at the top, and fewer at the very
bottom. The graph slowly rises, reaches a peak in the center (at the average marks), and
then gently falls again.
This beautiful, smooth, hill-shaped curve is called the:
Normal Distribution
or
Gaussian Distribution
or simply
The Bell Curve (because it looks like a bell 🔔)
So in very simple words:
👉 Normal distribution is a probability distribution where most values lie around the
average, and very few values lie at the extremes.
It is one of the most important concepts in statistics because many real-life things naturally
follow it, such as:
Heights of people
Intelligence (IQ scores)
Weight of newborn babies
Measurement errors
Blood pressure readings
Marks of students in large examinations
That’s why normal distribution is called the backbone of statistics.
🎨 Rough Sketch of Normal Distribution (Mentally Visualize It!)
Since I can’t draw directly here, imagine a smooth hill:
It:
Is highest in the middle
Is perfectly symmetric
Slopes down equally on both sides
Has long tails that never touch the X-axis
This shape represents the Probability Density Function (PDF) of the normal distribution.
📏 The Mathematical Form (Just for Awareness)
Even though you don’t need to memorize it deeply, the probability density function of a
normal distribution with mean μ and standard deviation σ is:

f(x) = (1 / (σ√(2π))) · e^(−(x − μ)² / (2σ²)),  −∞ < x < ∞

But don’t panic 😄
Important Properties of Normal Distribution (Explained Simply)
Now let’s discuss the key features that make normal distribution special.
It is Perfectly Symmetrical
The bell curve is not tilted to the left or right. It is like a mirror image.
This means:
Left side = Right side
The number of values below the average is equal to the number above it
If you fold the curve from the middle, both halves match!
So it represents fairness and balance in data.
Mean = Median = Mode
In many distributions, the average, median, and most frequent values are different.
But in a normal distribution:
👉 Mean (average)
👉 Median (middle value)
👉 Mode (most frequent value)
All three are the same and lie at the center of the curve.
This makes normal distribution mathematically very convenient and beautiful.
Bell-Shaped and Unimodal
The curve has only one peak, meaning it has only one mode (unimodal).
So:
Only one most common value exists
Data is concentrated around this central point
Spread of Data Depends on Standard Deviation (σ)
Standard deviation (σ) tells how spread out the data is.
Small σ → Narrow and tall bell curve (data tightly packed near mean)
Large σ → Wide and flat bell curve (data spread out)
So σ controls the “fatness” of the bell 😄
Total Area Under the Curve = 1
This simply means:
👉 The probability of all outcomes together = 100%
Every single value lies somewhere under the curve.
The Famous 68-95-99.7 Rule 📊
(Also called the Empirical Rule)
This is one of the most popular and scoring properties.
For a normal distribution:
68% of data lies within ±1 standard deviation from mean
(between μ − σ and μ + σ)
95% of data lies within ±2 standard deviations
(between μ − 2σ and μ + 2σ)
99.7% of data lies within ±3 standard deviations
(between μ − 3σ and μ + 3σ)
In simple words:
Almost all data lies near the mean!
Example:
If average height = 170 cm and σ = 5 cm, then:
68% people are between 165 and 175
95% are between 160 and 180
99.7% are between 155 and 185
Amazing, right?
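These percentages come from the standard normal CDF, Φ(z) = ½(1 + erf(z/√2)). A small Python sketch using the height example above (μ = 170 cm, σ = 5 cm):

```python
import math

# Standard normal CDF via the error function
def phi(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma = 170, 5
for k in (1, 2, 3):
    lo, hi = mu - k * sigma, mu + k * sigma
    prob = phi(k) - phi(-k)     # P(mu - k*sigma < X < mu + k*sigma)
    print(f"{lo}-{hi} cm: {prob:.1%}")
```

Running it prints approximately 68.3%, 95.4% and 99.7%, which is where the rule’s numbers come from.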
Tails Extend Infinitely
The curve never actually touches the X-axis.
It gets very close but never becomes zero.
This means:
👉 Extreme values are rare but not impossible
For example:
A very tall person like 7 feet is rare but still possible.
Determined by Two Parameters Only (μ and σ)
A normal distribution is completely defined by just:
μ → Mean (location of center)
σ → Standard deviation (spread)
If you know these two, you know everything about the curve!
Continuous Distribution
Normal distribution is a continuous probability distribution, meaning:
Values are not discrete numbers
They can take any real value
Like height can be:
170.2 cm
170.25 cm
170.253 cm
No gaps in between.
🎯 Why is Normal Distribution So Important?
Because nature loves balance.
Most real-life phenomena behave normally.
Also:
It forms the base of many statistical tests
Helps in quality control
Used in psychological testing
Used in economics and finance
Used in medical studies
Used in artificial intelligence and machine learning
Whenever something is “naturally occurring” and involves large numbers, normal
distribution quietly appears.
6. Dene Poisson distribuon. Under what condions is it applicable? Obtain Poisson
distribuon as a liming case of binomial distribuon. Also discuss the properes of
Poisson distribuon.
Ans: 🌟 Poisson Distribution: Definition, Applicability, Derivation, and Properties
🌟 Introduction
Probability theory gives us different distributions to model real-life random events. One of
the most fascinating and widely used is the Poisson distribution. It is often called the “law
of rare events” because it describes situations where events happen infrequently but
independently over time or space.
👉 In simple words: The Poisson distribution helps us answer questions like, “How many
phone calls will a call center receive in the next minute?” or “How many printing errors will
appear on a page?”
🌟 Definition of Poisson Distribution
A random variable X is said to follow a Poisson distribution with parameter λ (lambda) if its
probability mass function (PMF) is given by:

P(X = x) = e^(−λ) λˣ / x!,  x = 0, 1, 2, …

Here:
λ is the average number of occurrences in a given interval.
e is the mathematical constant (approximately 2.718).
x! is the factorial of x.
👉 Example: If a hospital receives on average 4 emergency cases per hour, then the
probability of exactly 2 cases in an hour is:

P(X = 2) = e^(−4) 4² / 2! = 8e^(−4) ≈ 0.1465
🌟 Conditions of Applicability
The Poisson distribution is applicable under the following conditions:
1. Events are rare: The probability of occurrence is small. 👉 Example: Number of
accidents at a traffic signal in one day.
2. Events occur independently: The occurrence of one event does not affect another.
👉 Example: Printing errors on different pages of a book.
3. Events occur at a constant average rate: The mean number of occurrences per
interval remains stable. 👉 Example: Average number of emails received per hour.
4. Events occur singly: Two or more events cannot occur at exactly the same instant.
👉 In short: Poisson distribution is best for modeling rare, independent events happening
over time or space.
🌟 Poisson Distribution as a Limiting Case of Binomial Distribution
The binomial distribution models the number of successes in n independent trials with
success probability p. Its PMF is:

P(X = k) = C(n, k) p^k (1 − p)^(n−k),  k = 0, 1, …, n
Now, let’s see how Poisson emerges as a limiting case:
Step 1: Setup
Suppose n is very large (many trials).
The probability of success p is very small (rare event).
The product np = λ is finite (the average number of successes).
Step 2: Rewrite Binomial PMF

P(X = k) = [n! / (k!(n − k)!)] (λ/n)^k (1 − λ/n)^(n−k)

Step 3: Apply Limits
As n → ∞, p → 0, but np = λ stays fixed.
Approximate (1 − λ/n)^n → e^(−λ) and (1 − λ/n)^(−k) → 1.
Simplify the factorial terms: n! / [(n − k)! n^k] → 1.
Step 4: Result

P(X = k) = e^(−λ) λ^k / k!

This is exactly the PMF of the Poisson distribution.
👉 Interpretation: When events are rare but trials are many, the binomial distribution
“converges” to the Poisson distribution.
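The convergence can be seen numerically: holding λ = np = 4 fixed and letting n grow, the binomial probability of k = 2 successes approaches the Poisson value (a minimal sketch):

```python
import math

# Binomial PMF: C(n, k) * p^k * (1 - p)^(n - k)
def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# Poisson PMF: e^(-lam) * lam^k / k!
def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam, k = 4, 2
for n in (10, 100, 10000):
    print(n, round(binom_pmf(k, n, lam / n), 5))   # approaches the Poisson value
print("Poisson:", round(poisson_pmf(k, lam), 5))
```

As n increases with np fixed, the binomial probabilities settle onto the Poisson(4) probability.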
🌟 Properties of Poisson Distribution
1. Mean and Variance
Mean (μ) = λ
Variance (σ²) = λ
👉 Unique feature: In the Poisson distribution, mean and variance are equal.
2. Additive Property
If X₁ ~ Poisson(λ₁) and X₂ ~ Poisson(λ₂) are independent, then:

X₁ + X₂ ~ Poisson(λ₁ + λ₂)
👉 Example: If one call center receives on average 3 calls per hour and another receives 5,
then together they receive on average 8 calls per hour, modeled by Poisson(8).
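The additive property can be verified by direct convolution: if X ~ Poisson(3) and Y ~ Poisson(5) independently, then P(X + Y = k), computed by summing over the ways to split k between X and Y, matches the Poisson(8) PMF (a minimal sketch):

```python
import math

# Poisson PMF: e^(-lam) * lam^k / k!
def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

k = 6
# P(X + Y = 6) = sum over i of P(X = i) * P(Y = 6 - i)
conv = sum(poisson_pmf(i, 3) * poisson_pmf(k - i, 5) for i in range(k + 1))
print(abs(conv - poisson_pmf(k, 3 + 5)) < 1e-12)  # True
```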
3. Memoryless Nature (Approximate)
While the memoryless property strictly belongs to the exponential distribution, Poisson shares a
similar idea: the probability of future events does not depend on past occurrences.
4. Skewness and Kurtosis
Skewness = 1/√λ
Excess kurtosis = 1/λ
👉 For large λ, the distribution becomes more symmetric and resembles the normal
distribution.
5. Relation to Normal Distribution
For large λ, Poisson can be approximated by the normal distribution:

X ≈ N(λ, λ)

👉 When λ is large, probabilities can be approximated using the normal curve.
6. Rare Event Law
Poisson is often used to model rare events like:
Number of typing errors per page.
Number of earthquakes in a region per year.
Number of defective items in a large batch.
🌟 Everyday Examples of Poisson Distribution
1. Traffic Flow: Number of cars passing a toll booth per minute.
2. Call Centers: Number of calls received in a given time.
3. Biology: Number of mutations in a DNA strand.
4. Sports: Number of goals scored in a football match.
5. Retail: Number of customers arriving at a shop in an hour.
👉 These examples show how Poisson distribution connects mathematics with real life.
📖 A Relatable Analogy
Think of raindrops falling on your window. Each drop falls independently, and you don’t
know exactly when the next one will fall. But if you observe long enough, you notice an
average rate, say 10 drops per minute. The Poisson distribution is like a mathematical
umbrella that helps you predict the probability of seeing exactly 8, 10, or 12 drops in the
next minute.
🌟 Conclusion
The Poisson distribution is a cornerstone of probability theory, especially for modeling rare,
independent events. It arises naturally as a limiting case of the binomial distribution when
the number of trials is large and the probability of success is small. Its properties (equal
mean and variance, additive nature, and approximation to the normal distribution) make it
versatile and powerful.
SECTION-D
7.(a) Disnguish between a census and a sampling enquiry and discuss their comparave
advantages.
(b) How will you select a sample using straed random sampling?
Ans: Part (a): Distinguish between Census and Sampling Enquiry & Their Advantages
Imagine your teacher wants to know how many students in your college are satisfied with
the canteen food. There are two ways to do this:
Ask every student individually.
Ask only a selected group of students and conclude based on their responses.
These two approaches represent Census and Sampling.
What is a Census?
A census is a complete investigation. It means collecting information from each and every
unit of the population.
In simple words:
Census = Study of all individuals
For example:
Government population census (every person in the country is counted)
School asking every student to submit feedback
A company checking every product for defects
So in a census, nothing is left out. Every unit is examined.
What is a Sampling Enquiry?
Sampling enquiry means studying only a part of the population, not everyone.
Instead, we select a sample (a small representative group) and draw conclusions about the
whole population from it.
In simple words:
Sampling = Study of some individuals to represent all
For example:
Election opinion polls ask only a few thousand voters, not every citizen.
Doctors take a small blood sample instead of taking all your blood.
Quality checking in factories tests only a few items out of thousands produced.
If the sample is selected wisely, the results are very close to reality.
Differences Between Census and Sampling Enquiry
Let us clearly distinguish both in simple points:
1. Coverage
Census: Covers every unit of the population
Sampling: Covers only selected units
2. Time Required
Census: Takes a lot of time
Government census happens once in 10 years because it is so time-consuming.
Sampling: Much faster
Few thousand people can be surveyed within days.
3. Cost
Census: Very expensive
Needs manpower, travel, forms, data processing, etc.
Sampling: Much cheaper
Requires fewer resources and less money.
4. Accuracy
Census: Generally more accurate because everyone is included.
But sometimes, due to large handling of data, mistakes can occur.
Sampling: Accuracy depends on how well the sample is chosen.
If the sample is scientifically selected, results are highly reliable.
5. Suitability
Census: Suitable when population is small or accuracy is extremely important.
Example: national population census, military records, school student strength.
Sampling: Suitable when population is large and time/cost is limited.
Example: opinion polls, market research, medical tests.
6. Destruction Possibility
Sometimes testing destroys items (like testing bulbs or medicines).
Census: Impossible, because everything would get destroyed.
Sampling: Well suited, because only a few units are tested.
Comparative Advantages
Advantages of Census
Highly accurate and reliable
Useful when detailed information is required
Better for small populations
Helpful in policymaking (like government planning)
Advantages of Sampling Enquiry
Less Time-Consuming: results come quickly
Less Costly: requires fewer resources and less money
More Practical for large populations
Equally Reliable if done scientifically
Useful in Destructive Testing
So, in short:
Census is like checking every grain in a sack of wheat.
Sampling is like checking only a handful to judge the quality.
Both are useful, but the choice depends on time, cost, and purpose.
Part (b): How to Select a Sample Using Stratified Random Sampling?
Now let’s understand stratified random sampling in a friendly manner.
Imagine your college has different groups:
Boys and Girls
Science, Arts, and Commerce students
First year, Second year, Final year students
If we randomly select students without considering these groups, we may accidentally pick:
More boys than girls
More science students than arts students
Or maybe mostly first-year students
Then the result will not represent the entire college fairly.
This is where Stratified Random Sampling helps.
What is Stratified Random Sampling?
In stratified random sampling, we divide the population into different groups called ‘strata’
based on some characteristics.
Then we pick random samples from each group.
In simple words:
Divide population into groups
Select random samples from each group
Combine them: this becomes the final sample
Steps of Stratified Random Sampling
Let’s make it very simple and step-by-step:
Step 1: Identify the Population
First, decide whom you want to study.
Example: All students of a university.
Step 2: Divide Population into Strata
Create groups based on characteristics.
You may divide based on:
Gender
Class / Year
Income level
Region (urban/rural)
Age group
Each group must be:
distinct
non-overlapping
covering the whole population
For example:
If we divide students by class:
1st year
2nd year
3rd year
Every student belongs to only one group.
Step 3: Decide Sample Size
Suppose you want a total sample of 300 students.
Step 4: Decide Number of Students from Each Stratum
There are two ways:
Equal Allocation
Take equal students from each group
Proportional Allocation
Take samples according to the size of each group
Example:
If 1st year has more students than 3rd year, take more from 1st year.
This makes the sample more representative.
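The proportional-allocation idea can be checked with simple arithmetic. Here is a minimal Python sketch; the strata sizes (1,500 first-year, 1,000 second-year, 500 third-year students) are assumed figures for illustration, while the total sample of 300 comes from Step 3 above:

```python
# Assumed strata sizes for illustration; only the sample total of 300
# comes from the example in the text.
strata = {"1st year": 1500, "2nd year": 1000, "3rd year": 500}
total_sample = 300

population = sum(strata.values())  # 3000 students in all
# Each stratum contributes in proportion to its share of the population
allocation = {name: round(total_sample * size / population)
              for name, size in strata.items()}
print(allocation)  # {'1st year': 150, '2nd year': 100, '3rd year': 50}
```

Notice how the largest group (1st year) contributes the most units, exactly as the text suggests.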
Step 5: Select Random Sample from Each Stratum
Now, from each group, select students randomly.
You may use:
lottery method
random number table
computer random tool
There should be no bias in the selection.
Step 6: Combine All Selected Units
Finally, combine all selected groups.
This final group becomes your Stratified Random Sample.
Why is Stratified Sampling Useful?
Because it:
Ensures fair representation
Reduces bias
Gives more accurate results
Useful when population is diverse
Simple Real-Life Example
Suppose a company has:
60% male workers
40% female workers
We want a sample of 100 workers.
Using stratified sampling:
We divide into 2 strata: Male & Female
We take 60 males and 40 females randomly
Now the sample truly represents the company
If we randomly selected without stratification, we may accidentally pick 80 males and only
20 females. That would be unfair.
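The worker example above can be sketched in code. This is a toy illustration, assuming a hypothetical population of 600 male and 400 female workers, with `random.sample` doing a simple random draw (without replacement) inside each stratum:

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

# Toy population: 600 male and 400 female workers (the 60% / 40% split)
workers = [("M", i) for i in range(600)] + [("F", i) for i in range(400)]

sample_size = 100
strata = {"M": [w for w in workers if w[0] == "M"],
          "F": [w for w in workers if w[0] == "F"]}

# Proportional allocation, then a random draw within each stratum
sample = []
for label, members in strata.items():
    n = round(sample_size * len(members) / len(workers))
    sample.extend(random.sample(members, n))

print(len(sample))                             # 100 workers in total
print(sum(1 for w in sample if w[0] == "M"))   # exactly 60 male, by construction
```

Because the allocation is fixed before the random draw, the 60/40 split is guaranteed, unlike the unstratified draw that could accidentally give 80 males.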
Conclusion
Census and Sampling are two powerful methods of data collection. Census covers everyone
and gives detailed and accurate results but requires lots of time and money. Sampling saves
time, cost, and effort and is practical for large populations. Stratified random sampling is a
smart sampling method that divides population into meaningful groups and then selects
random samples from each group to ensure fairness and accuracy.
8.(a) Disnguish between random sampling and subjecve sampling. How will you select a
sample from the populaon using simple random sampling technique (with replacement
and without replacement)?
(b) Explain the concept of standard error of esmates.
Ans: 🌟 Introduction
In statistics, sampling is like taking a small slice of a cake to judge the flavor of the whole.
Since studying an entire population is often impractical, we select a sample that represents
it. But the way we select this sample matters a lot: it can determine whether our
conclusions are reliable or misleading. Two common approaches are random sampling and
subjective sampling, and one of the most widely used random methods is simple random
sampling. Alongside sampling, statisticians also use the concept of standard error of
estimates to measure how accurate their sample-based predictions are.
👉 In simple words: Sampling tells us how to pick, and standard error tells us how good our
pick is.
🌟 (a) Random Sampling vs. Subjective Sampling
1. Random Sampling
Definition: Every unit in the population has an equal chance of being selected.
Nature: Objective, unbiased, and based on probability.
Example: Drawing names from a hat to select students for a survey.
Advantages:
o Eliminates personal bias.
o Results are more representative of the population.
o Statistical theory can be applied to measure accuracy.
👉 Random sampling is like rolling a fair die: no favoritism, just pure chance.
2. Subjective Sampling
Definition: The researcher selects samples based on personal judgment,
convenience, or preference.
Nature: Non-random, biased, and not based on probability.
Example: A teacher choosing the “best” students to represent the class in a
competition.
Advantages:
o Quick and easy to implement.
o Useful when probability-based methods are impractical.
Disadvantages:
o Highly prone to bias.
o Results may not represent the population accurately.
👉 Subjective sampling is like picking your favorite fruit from a basket: it reflects personal
choice, not fairness.
Key Distinction
Basis of selection: Random sampling is based on probability; subjective sampling on personal judgment.
Bias: Minimal in random sampling; high in subjective sampling.
Representativeness: Random sampling has a high chance of representing the population; subjective sampling is often unrepresentative.
Inference: Statistical inference is possible with random sampling; only limited inference with subjective sampling.
🌟 Simple Random Sampling Technique
Simple random sampling is the most basic and widely used random method. It ensures that
each unit in the population has an equal chance of being selected. There are two variations:
with replacement and without replacement.
1. With Replacement (SRSWR)
Process:
1. Each unit is selected randomly.
2. After selection, the unit is returned to the population before the next draw.
3. This means the same unit can be chosen more than once.
Example: Suppose we want to select 3 students from a class of 10. If we use SRSWR,
after picking one student, we put their name back in the hat. So, the same student
might be picked again.
Advantages:
o Each draw is independent.
o Probability calculations are simpler.
👉 Think of it like drawing cards from a deck and putting them back each time: you might
draw the same card again.
2. Without Replacement (SRSWOR)
Process:
1. Each unit is selected randomly.
2. Once selected, the unit is not returned to the population.
3. This means no unit can be chosen more than once.
Example: Selecting 3 students from a class of 10 without replacement means once a
student is picked, they cannot be picked again.
Advantages:
o Ensures diversity in the sample.
o More representative of the population.
👉 Think of it like drawing cards from a deck and keeping them aside—you’ll never draw
the same card twice.
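Both variants are easy to demonstrate in Python: `random.choices` draws with replacement (the name goes "back in the hat"), while `random.sample` draws without replacement. A minimal sketch for the class of 10 students mentioned above:

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable
students = [f"Student{i}" for i in range(1, 11)]  # a class of 10

# SRSWR: with replacement, so the same student may be picked more than once
with_replacement = random.choices(students, k=3)

# SRSWOR: without replacement, so all three picks are distinct
without_replacement = random.sample(students, k=3)

print(with_replacement)
print(without_replacement)
assert len(set(without_replacement)) == 3  # never the same student twice
```

The assertion at the end always holds for SRSWOR; for SRSWR, repeats are possible (and become likely when the sample is large relative to the population).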
Comparison of SRSWR and SRSWOR
Independence: In SRSWR each draw is independent; in SRSWOR the draws are dependent.
Repetition: Units may repeat in SRSWR; no repetition in SRSWOR.
Sample diversity: Lower in SRSWR; higher in SRSWOR.
Probability calculations: Easier in SRSWR; slightly more complex in SRSWOR.
🌟 (b) Standard Error of Estimates
Definition
The standard error (SE) is a measure of how much a sample statistic (like the sample mean)
is expected to vary from the true population parameter.
👉 In simple words: It tells us how “off” our sample estimate might be from the actual
truth.
Formula for Standard Error of the Mean
If the population has standard deviation σ and sample size n, then:
SE = σ / √n
Larger sample size → smaller SE (more accuracy).
Smaller sample size → larger SE (less accuracy).
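The relationship between sample size and standard error is easy to show numerically. A minimal Python sketch, assuming a population standard deviation of σ = 10:

```python
import math

def standard_error(sigma, n):
    """Standard error of the sample mean: SE = sigma / sqrt(n)."""
    return sigma / math.sqrt(n)

sigma = 10  # assumed population standard deviation
for n in (25, 100, 400):
    print(f"n = {n:3d}  ->  SE = {standard_error(sigma, n):.2f}")
# Quadrupling the sample size halves the standard error.
```

This makes the rule above concrete: going from n = 25 to n = 100 cuts the SE from 2.0 to 1.0, and n = 400 cuts it to 0.5.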
Importance of Standard Error
1. Measures Accuracy: SE shows how close the sample mean is likely to be to the
population mean.
2. Basis for Confidence Intervals: SE helps construct intervals around estimates to
indicate reliability.
3. Hypothesis Testing: SE is used to calculate test statistics like t-values and z-scores.
4. Comparison of Samples: SE allows us to judge whether differences between sample
means are significant.
Properties of Standard Error
1. Depends on Sample Size: Larger samples reduce SE.
2. Depends on Population Variability: Greater variability increases SE.
3. Random Nature: SE itself is an estimate, not a fixed value.
4. Foundation of Inferential Statistics: SE connects sample data to population
conclusions.
🌟 Everyday Example of Standard Error
Imagine you want to estimate the average height of students in a school. You measure 30
students and find the average height is 160 cm. But you know this is just a sample. The
standard error tells you how much this sample average might differ from the true average
height of all students.
If SE = 2 cm, it means the sample mean is likely within ±2 cm of the true mean.
If SE = 10 cm, the sample mean could be far off, making your estimate unreliable.
👉 Thus, SE acts like a “margin of uncertainty” around your sample estimate.
📖 A Relatable Analogy
Think of sampling as tasting soup before serving it.
Random sampling: You stir the pot and take a spoonful, a fair chance of representing
the whole soup.
Subjective sampling: You scoop from the top without stirring, so it is biased and may not
represent the soup.
Standard error: The uncertainty about whether your spoonful truly reflects the
flavor of the entire pot.
🌟 Conclusion
Random sampling ensures fairness and representativeness, while subjective
sampling relies on personal judgment and risks bias.
Simple random sampling can be done with or without replacement, each with its
own advantages.
The standard error of estimates is a crucial concept that measures the reliability of
sample-based predictions.
This paper has been carefully prepared for educational purposes. If you notice any
mistakes or have suggestions, feel free to share your feedback.